EFFICIENCY OF MATRIX ELEMENTS COMPUTATIONS ON PARALLEL SYSTEMS*
Authors
Abstract
Similar Papers
Parallel Matrix Computations
In this article we develop algorithms and tools for solving matrix problems on parallel processing computers. Operations are synchronized through data flow alone, which makes global synchronization unnecessary and enables the algorithms to be implemented on machines with very simple operating systems and communication protocols. As examples, we present algorithms that form the main module...
Exploiting Locality on Parallel Sparse Matrix Computations
At present, irregular problems are difficult to parallelize automatically because of their lack of regularity in data access patterns. In most cases, programmers must hand-write a particular solution for each problem separately. In this paper we present two pseudo-regular distributions which can be applied to partition most problems, achieving very good average-case distributions. Also, we have des...
Sparse Matrix Computations on Parallel Processor Arrays
We investigate the balancing of distributed compressed storage of large sparse matrices on a massively parallel computer. For fast computation of matrix-vector and matrix-matrix products on a rectangular processor array with efficient communications along its rows and columns, we require that the nonzero elements of each matrix row or column be distributed among the processors located within the s...
DAME: an environment for preserving the efficiency of data-parallel computations on distributed systems
Data-parallel programming is the most widely adopted paradigm for a large class of problems on traditional multicomputers (see the SPMD programming model sidebar, p. 23). Nevertheless, it is a very hard task to preserve efficiency when this style is adopted on a cluster of heterogeneous nodes having nonuniform and time-varying computational power. Very popular packages, such as PVM and MPI [1, 2], all...
Matrix Distributed Processing: A set of C++ Tools for implementing generic lattice computations on parallel systems
We present a set of programming tools (classes and functions written in C++ and based on the Message Passing Interface) for fast development of generic parallel (and non-parallel) lattice simulations. They are collectively called MDP 1.2. These programming tools include classes and algorithms for matrices, random number generators, distributed lattices (with arbitrary topology), fields, and parallel...
Journal
Journal title: Computational Methods in Science and Technology
Year: 2003
ISSN: 1505-0602
DOI: 10.12921/cmst.2003.09.01.137-145